We propose a series of computationally efficient, nonparametric tests for the two-sample, independence, and goodness-of-fit problems, using the Maximum Mean Discrepancy (MMD), Hilbert Schmidt Independence Criterion (HSIC), and Kernel Stein Discrepancy (KSD), respectively. Our test statistics are incomplete $U$-statistics, with a computational cost that interpolates between linear time in the number of samples and the quadratic time associated with classical $U$-statistic tests. The three proposed tests aggregate over several kernel bandwidths to detect departures over a range of scales: we call the resulting tests MMDAggInc, HSICAggInc and KSDAggInc. For the test thresholds, we derive a quantile bound for wild bootstrapped incomplete $U$-statistics, which is of independent interest. We derive uniform separation rates for MMDAggInc and HSICAggInc, and quantify exactly the trade-off between computational efficiency and the attainable rates: to our knowledge, this result is novel for tests based on incomplete $U$-statistics. We further show that, in the quadratic-time case, the wild bootstrap incurs no penalty to test power relative to the more widespread permutation-based approach, since both attain the same minimax optimal rates (which in turn match the rates obtained with oracle quantiles). We support our claims with numerical experiments on the trade-off between computational efficiency and test power. Across the three testing frameworks, we observe that our proposed linear-time aggregated tests obtain higher power than current state-of-the-art linear-time kernel tests.
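To make the interpolation concrete, the following is a minimal Python sketch (not the authors' implementation) of an incomplete $U$-statistic MMD test with a wild-bootstrap threshold: the pair design controls the cost, with n - 1 consecutive pairs giving linear time and all n(n-1) pairs recovering the classical quadratic-time $U$-statistic. The Gaussian kernel, the single fixed bandwidth, and all function names here are illustrative assumptions; the actual tests additionally aggregate over several bandwidths.

import numpy as np

def gaussian_kernel(x, y, bw):
    return np.exp(-np.sum((x - y) ** 2) / (2 * bw ** 2))

def h_mmd(x1, x2, y1, y2, bw):
    # Core h((x1,y1),(x2,y2)) of the MMD^2 U-statistic for one pair of points.
    k = lambda a, b: gaussian_kernel(a, b, bw)
    return k(x1, x2) + k(y1, y2) - k(x1, y2) - k(x2, y1)

def incomplete_mmd_test(X, Y, bw, design, n_boot=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hs = np.array([h_mmd(X[i], X[j], Y[i], Y[j], bw) for i, j in design])
    stat = hs.mean()
    n = len(X)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        eps = rng.choice([-1.0, 1.0], size=n)        # Rademacher variables
        signs = np.array([eps[i] * eps[j] for i, j in design])
        boot[b] = np.mean(signs * hs)                # wild-bootstrapped statistic
    return stat > np.quantile(boot, 1 - alpha)       # reject H0: P = Q?

n = 200
rng = np.random.default_rng(1)
X, Y = rng.normal(0.0, 1.0, (n, 2)), rng.normal(0.5, 1.0, (n, 2))
design = [(i, i + 1) for i in range(n - 1)]          # linear-time pair design
print(incomplete_mmd_test(X, Y, bw=1.0, design=design))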
We investigate the properties of goodness-of-fit tests based on the Kernel Stein Discrepancy (KSD). We introduce a strategy to construct a test, called KSDAgg, which aggregates multiple tests with different kernels. KSDAgg avoids splitting the data to perform kernel selection (which leads to a loss in test power) and maximises the test power over a collection of kernels. We provide theoretical guarantees on the power of KSDAgg: we show that it achieves the smallest separation rate over the collection, up to a logarithmic term. KSDAgg can be computed exactly in practice, as it relies either on a parametric bootstrap or on a wild bootstrap to estimate the quantiles and the level corrections. In particular, for the crucial choice of the bandwidth of a fixed kernel, it avoids resorting to arbitrary heuristics (such as the median or standard deviation) or to data splitting. We find on both synthetic and real-world data that KSDAgg outperforms other relevant adaptive KSD-based goodness-of-fit testing procedures.
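As a concrete illustration, here is a hedged Python sketch of the overall recipe: a KSD U-statistic computed for several Gaussian-kernel bandwidths, each calibrated by a wild bootstrap. The standard-normal target (score s(x) = -x), the bandwidth grid, and the plain Bonferroni correction across bandwidths are assumptions for illustration; KSDAgg estimates tighter level corrections via the bootstrap rather than using Bonferroni.

import numpy as np

def stein_kernel_matrix(X, score, bw):
    # U-statistic kernel u_p(x, y) for the Gaussian kernel of bandwidth bw.
    n, d = X.shape
    S = score(X)                                     # (n, d) score at each sample
    diff = X[:, None, :] - X[None, :, :]             # (n, n, d) pairwise x - y
    sq = np.sum(diff ** 2, axis=-1)                  # (n, n) squared distances
    K = np.exp(-sq / (2 * bw ** 2))
    term1 = S @ S.T
    term2 = np.einsum('id,ijd->ij', S, diff) / bw ** 2
    term3 = -np.einsum('jd,ijd->ij', S, diff) / bw ** 2
    term4 = d / bw ** 2 - sq / bw ** 4
    return K * (term1 + term2 + term3 + term4)

def ksd_agg(X, score, bandwidths, n_boot=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    reject = False
    for bw in bandwidths:
        H = stein_kernel_matrix(X, score, bw)
        np.fill_diagonal(H, 0.0)                     # U-statistic: drop i == j
        stat = H.sum() / (n * (n - 1))
        boot = np.empty(n_boot)
        for b in range(n_boot):
            eps = rng.choice([-1.0, 1.0], size=n)    # wild bootstrap signs
            boot[b] = (eps @ H @ eps) / (n * (n - 1))
        # Bonferroni correction across bandwidths, a crude stand-in for
        # KSDAgg's bootstrap-calibrated level correction.
        reject |= stat > np.quantile(boot, 1 - alpha / len(bandwidths))
    return reject

score = lambda X: -X                                 # standard normal: grad log p(x) = -x
X = np.random.default_rng(1).normal(0.5, 1.0, (300, 2))
print(ksd_agg(X, score, bandwidths=[0.5, 1.0, 2.0]))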
Indirect Time-of-Flight (I-ToF) imaging has become a widespread depth estimation modality for mobile devices, thanks to its small size and affordable price. Previous works have mainly focused on quality improvement for I-ToF imaging, in particular curing the effects of Multi-Path Interference (MPI). These investigations are typically carried out in specifically constrained scenarios: at close distance, indoors, and under little ambient light. Surprisingly little work has investigated I-ToF quality improvement in real-life scenarios, where strong ambient light and far distances pose difficulties due to the shot noise and signal sparsity induced by limited sensor power and light scattering. In this work, we propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image, fuses their latent representations through a multi-step approach involving both implicit and explicit alignment, and predicts a high-quality, long-range depth map aligned with the RGB viewpoint. We test our approach on challenging real-world scenes and show an RMSE improvement of more than 40% on the final depth map compared to the baseline approach.
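For orientation only, the snippet below sketches the general shape of such a fusion network in PyTorch under heavy simplifying assumptions: four raw correlation channels for the I-ToF input, plain latent concatenation in place of the paper's multi-step implicit and explicit alignment, and arbitrary layer sizes. It is a toy stand-in, not the proposed architecture.

import torch
import torch.nn as nn

class ToFRGBFusion(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Separate encoders for the raw I-ToF correlations and the RGB image.
        self.tof_enc = nn.Sequential(nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, tof_raw, rgb):
        # tof_raw: (B, 4, H, W) raw correlation measurements; rgb: (B, 3, H, W).
        z = torch.cat([self.tof_enc(tof_raw), self.rgb_enc(rgb)], dim=1)
        return self.decoder(z)                       # (B, 1, H, W) depth map

model = ToFRGBFusion()
depth = model(torch.randn(1, 4, 64, 64), torch.randn(1, 3, 64, 64))
print(depth.shape)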
We propose a novel nonparametric two-sample test based on the Maximum Mean Discrepancy (MMD), constructed by aggregating tests with different kernel bandwidths. This aggregation procedure, called MMDAgg, ensures that test power is maximised over the collection of kernels used, without requiring held-out data for kernel selection (which results in a loss of test power) or arbitrary kernel choices such as the median heuristic. We work in a non-asymptotic framework and prove that our aggregated test is minimax adaptive over Sobolev balls. Our guarantees are not restricted to a specific kernel but hold for any product of one-dimensional, translation-invariant characteristic kernels that are absolutely integrable. Moreover, our results apply to popular numerical procedures for determining the test threshold, namely permutations and the wild bootstrap. Through numerical experiments on synthetic and real-world datasets, we demonstrate that MMDAgg outperforms alternative state-of-the-art approaches to MMD kernel adaptation for two-sample testing.
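The following is a minimal Python sketch of the aggregation idea, assuming Gaussian kernels, a small hand-picked bandwidth grid, and a simple Bonferroni correction; MMDAgg itself calibrates the per-bandwidth levels jointly through the permutation procedure rather than by Bonferroni.

import numpy as np

def mmd2_biased(K, n):
    # Biased MMD^2 estimate from the joint kernel matrix of Z = [X; Y].
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def mmd_agg(X, Y, bandwidths, n_perm=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Z = np.vstack([X, Y])
    sq = np.sum((Z[:, None] - Z[None, :]) ** 2, axis=-1)
    for bw in bandwidths:
        K = np.exp(-sq / (2 * bw ** 2))
        stat = mmd2_biased(K, n)
        perms = np.empty(n_perm)
        for b in range(n_perm):
            idx = rng.permutation(2 * n)             # relabel the pooled sample
            perms[b] = mmd2_biased(K[np.ix_(idx, idx)], n)
        if stat > np.quantile(perms, 1 - alpha / len(bandwidths)):
            return True                              # some bandwidth detects P != Q
    return False

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (150, 1))
Y = rng.normal(0.0, 1.3, (150, 1))                   # variance shift alternative
print(mmd_agg(X, Y, bandwidths=[0.25, 0.5, 1.0, 2.0]))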
Kinship verification, determining whether a parent-child, sibling, or grandparent-grandchild relationship holds between two people, is important in social media applications, forensic investigations, finding missing children, and reuniting families. We demonstrate high-quality kinship verification by participating in the 2021 Recognizing Families in the Wild challenge, which provides the largest publicly available dataset in the field. Our approach is one of the top three winning entries in the competition. We ensemble models written by human experts and by OpenAI Codex, a foundation model trained on text and code. We use Codex to generate model variants, and also demonstrate its ability to generate entire running programs for kinship verification tasks on specific relationships.
Multi-agent behavior modeling aims to understand the interactions that occur between agents. We present a multi-agent dataset from behavioral neuroscience, the Caltech Mouse Social Interactions (CalMS21) dataset. Our dataset consists of trajectory data of social interactions, recorded from videos of freely behaving mice in a standard resident-intruder assay. To help accelerate behavioral studies, the CalMS21 dataset provides benchmarks to evaluate the performance of automated behavior classification methods in three settings: (1) training with all annotations produced by a single annotator, (2) style transfer to learn inter-annotator differences in behavior definitions, and (3) learning new behaviors of interest given limited training data. The dataset consists of 6 million frames of unlabeled tracked poses of interacting mice, as well as over 1 million frames with tracked poses and corresponding frame-level behavior annotations. The challenge of our dataset is to classify behaviors accurately using both labeled and unlabeled tracking data, and to generalize to new settings.
This paper proposes a novel two-stage defense (NNoculation) against backdoored neural networks (BadNets) that repairs a BadNet both pre-deployment and online, in response to backdoored test inputs encountered in the field. In the pre-deployment stage, NNoculation retrains the BadNet with random perturbations of clean validation inputs to partially reduce the adversarial impact of a backdoor. Post-deployment, NNoculation detects and quarantines backdoored test inputs by recording disagreements between the original and the pre-deployment patched networks. A CycleGAN is then trained to learn transformations between clean validation inputs and quarantined inputs; that is, it learns to add triggers to clean validation images. Backdoored validation images, along with their correct labels, are used to further retrain the pre-deployment patched network, yielding our final defense. Empirical evaluation on a comprehensive suite of backdoor attacks shows that NNoculation outperforms all state-of-the-art defenses, which make restrictive assumptions and only work against specific backdoor attacks, or fail against adaptive attacks. In contrast, NNoculation makes minimal assumptions and provides an effective defense, even in settings where existing defenses are ineffective because attackers have circumvented their restrictive assumptions.
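To illustrate the post-deployment logic only, here is a schematic Python sketch of the disagreement-based quarantine step. The badnet and patched objects with a predict() method are hypothetical stand-ins, and the downstream CycleGAN and retraining stages are omitted.

def quarantine(badnet, patched, inputs):
    """Split a stream of test inputs into served and quarantined sets.

    badnet:  the original (possibly backdoored) network, hypothetical object
    patched: the pre-deployment patched network, hypothetical object
    """
    served, quarantined = [], []
    for x in inputs:
        y_orig = badnet.predict(x)
        y_patched = patched.predict(x)
        if y_orig == y_patched:
            served.append((x, y_patched))    # networks agree: likely clean
        else:
            quarantined.append(x)            # disagreement: possible trigger,
                                             # held for the CycleGAN stage
    return served, quarantined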
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key steps extraction. We employ self-supervised representation learning via a training strategy that adapts off-the-shelf video features using a temporal module. Training implements self-supervised learning losses involving multiple cues such as appearance, motion and pose trajectories extracted from videos to learn generalizable representations. Our method extracts key steps via a tunable algorithm that clusters the representations extracted from procedural videos. We quantitatively evaluate our approach with key step localization and also demonstrate the effectiveness of the extracted representations on related downstream tasks like phase classification. Qualitative results demonstrate that the extracted key steps are meaningful to succinctly represent the procedural tasks.
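As a rough sketch of the extraction stage alone, assuming per-frame embeddings have already been produced by the representation module: the choice of k-means, the use of scikit-learn, and the medoid-style selection rule below are illustrative stand-ins for the paper's tunable clustering algorithm.

import numpy as np
from sklearn.cluster import KMeans

def extract_key_steps(embeddings, n_steps):
    """embeddings: (n_frames, dim) array; returns key-step frame indices."""
    km = KMeans(n_clusters=n_steps, n_init=10, random_state=0).fit(embeddings)
    key_frames = []
    for c in range(n_steps):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        key_frames.append(members[np.argmin(dists)])  # frame nearest the centroid
    return sorted(key_frames)                         # restore temporal order

emb = np.random.default_rng(0).normal(size=(500, 64)) # stand-in embeddings
print(extract_key_steps(emb, n_steps=5))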
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.